
    Dual Content Semantics, privative adjectives and dynamic compositionality

    This paper defends the view that common nouns have a dual semantic structure that includes extension-determining and non-extension-determining components. I argue that the non-extension-determining components are part of linguistic meaning because they play a key compositional role in certain constructions, especially in privative noun phrases such as "fake gun" and "counterfeit document". Furthermore, I show that if we modify the compositional interpretation rules in certain simple ways, this dual content account of noun phrase modification can be implemented in a type-driven formal semantic framework. In addition, I argue against traditional accounts of privative noun phrases which can be paired with the assumption that nouns do not have a dual semantic structure. At the most general level, this paper presents a proposal for how we can begin to integrate a psychologically realistic account of lexical semantics with a linguistically plausible compositional semantic framework.
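    The dual-content idea can be pictured with a toy sketch. Everything below (the two-component noun meaning, the feature lexicon, and the rule for "fake") is an illustrative assumption, not the paper's actual formalism: a privative modifier is rendered as a function that shifts a noun's extension using its non-extension-determining features.

```python
# Toy sketch of a dual-content semantics: a noun meaning pairs an
# extension-determining component with a non-extension-determining one
# (here, a set of stereotypical features). Purely illustrative.

from dataclasses import dataclass

@dataclass
class NounMeaning:
    extension: frozenset   # things the noun is actually true of
    features: frozenset    # non-extension-determining content

DOMAIN = frozenset({"real_gun_1", "real_gun_2", "toy_gun_1"})

# Hypothetical lexicon: which objects display which stereotypical features.
DISPLAYS = {
    "real_gun_1": {"gun_shaped", "fires_bullets"},
    "real_gun_2": {"gun_shaped", "fires_bullets"},
    "toy_gun_1": {"gun_shaped"},
}

GUN = NounMeaning(
    extension=frozenset({"real_gun_1", "real_gun_2"}),
    features=frozenset({"gun_shaped", "fires_bullets"}),
)

def fake(noun: NounMeaning) -> NounMeaning:
    """Privative modifier: 'fake N' is true of things NOT in N's
    extension that nevertheless display some of N's features."""
    new_ext = frozenset(
        x for x in DOMAIN
        if x not in noun.extension and DISPLAYS[x] & noun.features
    )
    return NounMeaning(extension=new_ext, features=noun.features)

print(sorted(fake(GUN).extension))  # ['toy_gun_1']
```

    The point of the sketch: because "fake" consults the features component, the non-extension-determining content does real compositional work, which is the compositional role the abstract attributes to it.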

    The Structure of Semantic Competence: Compositionality as an Innate Constraint of The Faculty of Language

    This paper defends the view that the Faculty of Language is compositional, i.e., that it computes the meaning of complex expressions from the meanings of their immediate constituents and their structure. I argue that compositionality and other competing constraints on the way in which the Faculty of Language computes the meanings of complex expressions should be understood as hypotheses about innate constraints of the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper presents a way of framing the compositionality debate (by focusing on its implications for language acquisition) that can lead to its eventual resolution, so it will hopefully also interest theorists who disagree with its main conclusion.

    The Logicality of Language: A new take on Triviality, “Ungrammaticality”, and Logical Form

    Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth‐conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the ‘logicality of language’, accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter‐examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is ‘blind’ to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non‐classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.
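    A minimal way to picture the "triviality filter" at the heart of the logicality hypothesis is a brute-force check for logical truth or falsity. This is only a toy stand-in (the deductive systems at issue are far richer, and the debate is precisely over which terms the system can "see"), but it shows the intended behavior: trivial formulas get flagged, contingent ones pass.

```python
# Illustrative triviality check: a formula is trivial if it is true in
# every valuation (tautology) or in none (contradiction). Toy stand-in
# for the natural deductive system discussed in the abstract.

from itertools import product

def is_trivial(formula, atoms):
    """formula: a function from a valuation dict to bool."""
    values = [
        formula(dict(zip(atoms, vs)))
        for vs in product([True, False], repeat=len(atoms))
    ]
    return all(values) or not any(values)

# 'p or not p' is a tautology -> trivial -> marked unacceptable.
print(is_trivial(lambda v: v["p"] or not v["p"], ["p"]))      # True

# 'p and q' is contingent -> not trivial -> acceptable.
print(is_trivial(lambda v: v["p"] and v["q"], ["p", "q"]))    # False
```

    On the "blindness" assumption the abstract criticizes, two tokens of the same open-class predicate would be treated as distinct atoms here, which is exactly what lets acceptable tautologies slip through the filter.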

    Prototypes as compositional components of concepts


    The Logicality of Language: Contextualism versus Semantic Minimalism

    The logicality of language is the hypothesis that the language system has access to a ‘natural’ logic that can identify and filter out as unacceptable expressions that have trivial meanings—that is, that are true/false in all possible worlds or situations in which they are defined. This hypothesis helps explain otherwise puzzling patterns concerning the distribution of various functional terms and phrases. Despite its promise, logicality vastly over-generates unacceptability assignments. Most solutions to this problem rest on specific stipulations about the properties of logical form—roughly, the level of linguistic representation which feeds into the interpretation procedures—and have substantial implications for traditional philosophical disputes about the nature of language. Specifically, contextualism and semantic minimalism, construed as competing hypotheses about the nature and degree of context-sensitivity at the level of logical form, suggest different approaches to the over-generation problem. In this paper, I explore the implications of pairing logicality with various forms of contextualism and semantic minimalism. I argue that to adequately solve the over-generation problem, logicality should be implemented in a constrained contextualist framework.

    Oddness, modularity, and exhaustification

    According to the 'grammatical account', scalar implicatures are triggered by a covert exhaustification operator present in logical form. This account covers considerable empirical ground, but there is a peculiar pattern that resists treatment given its usual implementation. The pattern centers on odd assertions like #"Most lions are mammals" and #"Some Italians come from a beautiful country", which seem to trigger implicatures in contexts where the enriched readings conflict with information in the common ground. Magri (2009, 2011) argues that, to account for these cases, the basic grammatical approach has to be supplemented with the stipulations that exhaustification is obligatory and is based on formal computations which are blind to information in the common ground. In this paper, I argue that accounts of oddness should allow for the possibility of felicitous assertions that call for revision of the common ground, including explicit assertions of unusual beliefs such as "Most but not all lions are mammals" and "Some but not all Italians come from Italy". To adequately cover these and similar cases, I propose that Magri's version of the grammatical account should be refined with the novel hypothesis that exhaustification triggers a bifurcation between presupposed (the negated relevant alternatives) and at-issue (the prejacent) content. The explanation of the full oddness pattern, including cases of felicitous proposals to revise the common ground, follows from the interaction between presupposed and at-issue content with an independently motivated constraint on accommodation. Finally, I argue that treating the exhaustification operator as a presupposition trigger helps solve various independent puzzles faced by extant grammatical accounts, and motivates a substantial revision of standard accounts of the overt exhaustifier "only".
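    The proposed bifurcation can be sketched over sets of possible worlds. In the toy below (worlds, propositions, and the exclusion criterion are all illustrative), the exhaustification operator returns two components instead of one: the prejacent as at-issue content and the conjoined negations of the non-entailed alternatives as presupposed content.

```python
# Toy exhaustification with presupposed/at-issue bifurcation.
# Propositions are sets of worlds; all specifics are illustrative.

WORLDS = frozenset(range(4))

def neg(p):
    return WORLDS - p

def exh(prejacent, alternatives):
    """Return (at_issue, presupposed): the prejacent, plus the
    conjoined negations of alternatives not entailed by it."""
    excludable = [a for a in alternatives if not prejacent <= a]
    presup = WORLDS
    for a in excludable:
        presup = presup & neg(a)
    return prejacent, presup

# 'some Fs are G' vs. the stronger alternative 'all Fs are G'.
SOME = frozenset({1, 2, 3})
ALL = frozenset({3})          # the all-worlds are a subset of the some-worlds

at_issue, presup = exh(SOME, [ALL])
# At-issue: 'some'; presupposed: 'not all'. Their intersection is the
# enriched 'some but not all' reading.
print(sorted(at_issue & presup))
```

    On this picture, oddness arises when the presupposed component (not the at-issue one) clashes with the common ground and cannot be accommodated, which is what lets explicit "some but not all" assertions remain felicitous.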

    Probabilistic semantics for epistemic modals: normality assumptions, conditional epistemic spaces, and the strength of `must' and `might'

    The epistemic modal auxiliaries 'must' and 'might' are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, 'must' is a maximally strong epistemic operator and 'might' is a bare possibility one. A competing account---popular amongst proponents of a probabilistic turn---says that, given a body of evidence, 'must p' entails that Pr(p) is high but non-maximal and 'might p' that Pr(p) is significantly greater than 0. Drawing on several observations concerning the behavior of 'must', 'might' and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which 'must p' entails that Pr(p) = 1 and 'might p' that Pr(p) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, 'must' and 'might' are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions---some of which we represent as defeasible---which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated 'grammatical' approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
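    The key clause here (that 'must p' requires Pr(p) = 1 and 'might p' requires Pr(p) > 0 only after conditioning on normality assumptions) can be rendered as a toy computation. The worlds, probabilities, and assumptions below are made up for illustration; the point is that 'must' is maximally strong relative to the normality-restricted space, not relative to the raw prior.

```python
# Toy model: evaluate 'must p' / 'might p' against a prior over worlds
# conditioned on a set of defeasible normality assumptions.
# All worlds, numbers, and assumptions are illustrative.

def condition(prior, assumptions):
    """Restrict a prior over worlds to those satisfying every
    assumption, then renormalize."""
    kept = {w: pr for w, pr in prior.items()
            if all(a(w) for a in assumptions)}
    total = sum(kept.values())
    return {w: pr / total for w, pr in kept.items()}

def prob(dist, p):
    return sum(pr for w, pr in dist.items() if p(w))

def must(dist, p):
    return prob(dist, p) == 1.0   # maximal strength, post-conditioning

def might(dist, p):
    return prob(dist, p) > 0.0    # bare possibility, post-conditioning

# Worlds are (weather, sprinkler) pairs; the normality assumption is
# that the sprinkler is working.
prior = {("rain", "ok"): 0.5, ("dry", "ok"): 0.3, ("dry", "broken"): 0.2}
normal = [lambda w: w[1] == "ok"]
wet = lambda w: w[0] == "rain"

post = condition(prior, normal)
print(must(post, wet), might(post, wet))   # False True

# Conditioning further on direct evidence of rain makes 'must' true:
evidenced = condition(prior, normal + [wet])
print(must(evidenced, wet))                # True
```

    Because the assumptions are represented as a separate, defeasible restriction, retracting one of them re-expands the space of live worlds, which is the mechanism the abstract invokes for retraction scenarios.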

    Asymmetry effects in generic and quantified generalizations

    Generic statements ('Tigers have stripes') are pervasive and early-emerging modes of generalization with a distinctive linguistic profile. Previous experimental work found that generics display a unique asymmetry between their acceptance conditions and the implications that are typically drawn from them. This paper presents evidence against the hypothesis that only generics display an asymmetry. Correcting for limitations of previous designs, we found a generalized asymmetry effect across generics, various kinds of explicitly quantified statements ('most', 'some', 'typically', 'usually'), and variations in types of predicated properties (striking vs. neutral). We discuss implications of these results for our understanding of the source of asymmetry effects and whether and in which ways these effects might introduce biased beliefs into social networks.

    Dual character concepts in social cognition: Commitments and the normative dimension of conceptual representation

    The concepts expressed by social role terms such as artist and scientist are unique in that they seem to allow two independent criteria for categorization, one of which is inherently normative (Knobe et al., 2013). This paper presents and tests an account of the content and structure of the normative dimension of these ‘dual character concepts’. Experiment 1 suggests that the normative dimension of a social role concept represents the commitment to fulfill the idealised basic function associated with the role. Background information can affect which basic function is associated with each social role. However, Experiment 2 indicates that the normative dimension always represents the relevant commitment as an end in itself. We argue that social role concepts represent the commitments to basic functions because that information is crucial to predict the future social roles and role-dependent behavior of others.